

Search for: All records

Creators/Authors contains: "Fortes, Jose A.B."


  1. Information Extraction (IE) from imaged text is affected by the output quality of the text-recognition process. Misspelled or missing text may propagate errors or even preclude IE. Low confidence in automated methods is why some IE projects rely exclusively on human work (crowdsourcing). That is the case for biological collections (biocollections), where the metadata (Darwin Core terms) found on digitized labels are transcribed by citizen scientists. In this paper, we present an approach to reduce the number of crowdsourcing tasks required to obtain the transcription of the text found in biocollections' images. Using an ensemble of Optical Character Recognition (OCR) engines (OCRopus, Tesseract, and the Google Cloud OCR), our approach identifies the lines and characters that have a high probability of being correct, so that crowdsourced transcription is needed only for low-confidence fragments of text. The number of lines to transcribe is further reduced through hybrid human-machine crowdsourcing, in which the output of the OCR ensemble is used as the first "human" transcription in the redundant crowdsourcing process. Our approach was tested on six biocollections (2,966 images), reducing the number of crowdsourcing tasks by 76% (58% due to lines accepted by the OCR ensemble and about 18% due to accelerated convergence when using hybrid crowdsourcing). The automatically extracted text had a character error rate of 0.001 (0.1%). (A minimal sketch of the ensemble-acceptance idea appears after this list.)
  2. DOI: 10.1109/CANDARW.2019.00093
  3. Biological collections store information with broad societal and environmental impact. In the last 15 years, after worldwide investments and crowdsourcing efforts, 25% of the collected specimens have been digitized, a process that includes imaging the text attached to specimens and then extracting information from the resulting images. This information extraction (IE) process is complex, and therefore slow, and typically involves human tasks. We propose a hybrid (human-machine) information extraction model that efficiently uses resources of different costs (machines, volunteers, and/or experts) and speeds up the digitization of biocollections, while striving to maintain the same quality as human-only IE processes. In the proposed model, called SELFIE, self-aware IE processes determine whether their output quality is satisfactory; if it is not, additional or alternative processes that yield higher-quality output at higher cost are triggered. The effectiveness of this model is demonstrated by three SELFIE workflows for the extraction of Darwin Core terms from specimen images. Compared to the traditional human-driven IE approach, SELFIE workflows showed, on average, a 27% reduction in information-capture time and a 32% decrease in the required number of humans and their associated cost, while the quality of the results was reduced by a negligible 0.27%. (A sketch of the escalation logic appears after this list.)
  4. Citizen science projects have successfully taken advantage of volunteers to unlock scientific information contained in images. Crowds extract scientific data by completing different types of activities: transcribing text, selecting values from pre-defined options, reading data aloud, or pointing and clicking at graphical elements. When designing crowdsourcing tasks, selecting the best form of input and the right task granularity is essential for keeping volunteers engaged and maximizing the quality of the results. In the context of biocollections information extraction, this study compares three interface actions (transcribe, select, and crop) and tasks of different levels of granularity (single-field vs. compound tasks). Using 30 crowdsourcing experiments and two different populations, these interface alternatives are evaluated in terms of speed, quality, perceived difficulty, and enjoyability. The results show that Selection and Transcription tasks generate high-quality output, but they are perceived as boring. Conversely, Cropping tasks, and arguably graphical tasks in general, are more enjoyable, but their output quality depends on additional machine-oriented processing. When the text to be extracted is longer than two or three words, Transcription is slower than Selection and Cropping. With compound tasks, the overall time required for the crowdsourcing experiment is considerably shorter than with single-field tasks, but compound tasks are perceived as more difficult. With single-field tasks, both the output quality and the amount of identified data are slightly higher than with compound tasks, but these tasks are perceived by the crowd as less entertaining.
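To make the line-acceptance idea from entry 1 concrete, here is a minimal sketch in Python, assuming that agreement among the OCR engines is used as a confidence proxy. The function names, the similarity measure, and the threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of ensemble-based line acceptance (hypothetical names and
# threshold; agreement among engines is used as a confidence proxy).
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Character-level similarity between two OCR outputs of the same line."""
    return SequenceMatcher(None, a, b).ratio()

def accept_or_crowdsource(line_versions, agreement_threshold=0.95):
    """
    line_versions: one string per OCR engine (e.g. OCRopus, Tesseract,
    Google Cloud OCR) for the same image line.
    Returns (text, needs_crowdsourcing).
    """
    # Pairwise agreement among the engines' outputs.
    scores = [
        similarity(line_versions[i], line_versions[j])
        for i in range(len(line_versions))
        for j in range(i + 1, len(line_versions))
    ]
    if scores and min(scores) >= agreement_threshold:
        # High-confidence line: accept automatically, no human task needed.
        return line_versions[0], False
    # Low-confidence line: send to crowdsourcing, seeding the redundant
    # transcription process with the OCR output as the first "human" vote.
    return line_versions[0], True
```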
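Similarly, the self-aware escalation described for SELFIE in entry 3 can be sketched as a loop over extractors ordered by increasing cost (machine, volunteer, expert) that stops once a self-assessed confidence is satisfactory. The extractor interface, the confidence estimate, and the threshold below are hypothetical placeholders, not the published workflows.

```python
# Minimal sketch of a self-aware extraction step that escalates to more
# expensive resources when output quality is unsatisfactory (hypothetical
# interface; each extractor returns a value and a confidence estimate).
from typing import Callable, List, Tuple

Extractor = Callable[[str], Tuple[str, float]]  # (value, confidence)

def extract_term(image_path: str,
                 extractors: List[Extractor],
                 quality_threshold: float = 0.9) -> str:
    """
    Try extractors in order of increasing cost (machine -> volunteer ->
    expert) and stop as soon as the confidence is satisfactory.
    """
    value = ""
    for extract in extractors:
        value, confidence = extract(image_path)
        if confidence >= quality_threshold:
            break  # quality is satisfactory; no further escalation
    return value
```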